






A Derivation Details

Neural Information Processing Systems

This appendix provides derivation details for the ELBO objective (3) presented in the main text. Firstly, the latent variables have very different meanings. Another important contribution of the paper is the generalization of deep CAMA to generic measurement data. We also performed experiments using different DNN architectures; Figure 15 shows the performance against different shifts.
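Equation (3) itself is not reproduced in this excerpt. For a conditional generative model with class label $y$ and separate latents $z$ (other causes) and $m$ (manipulations), as described here, a standard variational lower bound takes the following generic form (an assumption for illustration, not necessarily the paper's exact equation (3)):

\[
\log p_\theta(x, y) \;\ge\; \mathbb{E}_{q_\phi(z, m \mid x, y)}\big[\log p_\theta(x \mid y, z, m) + \log p(y) + \log p(z) + \log p(m) - \log q_\phi(z, m \mid x, y)\big]
\]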



In addition, we will make the code publicly available, together with the paper.


We thank the reviewers for their time and insightful comments on our paper. We respond to each reviewer (R) below. R1's comments cover only part of contribution (2); R3 also pointed out "The proposed fine-tuning phase to learn unseen M". R1, decomposition necessity and CVAE comparisons: we emphasise that CVAE and the other mentioned work can. R2, comparisons to IRM: first, IRM only considers single-modality data. R3, (adversarial) data augmentation: deep CAMA also benefits from adversarial training (Figure 10).
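The fine-tuning phase for unseen manipulations mentioned above can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: the model is a linear reconstruction, and the only trainable part is a stand-in for the manipulation branch, with everything tied to the other causes held frozen.

```python
import numpy as np

# Toy sketch of test-time fine-tuning on manipulated data: freeze the
# parameters tied to y and z, update only the manipulation branch Wm.
# All names (Wm, frozen, m_true) are illustrative assumptions.

rng = np.random.default_rng(1)
dim_m, dim_x, n = 2, 8, 64

Wm = np.zeros((dim_m, dim_x))         # manipulation branch: the only trainable part
frozen = rng.normal(size=(n, dim_x))  # frozen contribution from the y- and z-branches

# Simulated manipulated observations: frozen part plus an unseen shift.
m_true = rng.normal(size=(n, dim_m))
Wm_true = rng.normal(size=(dim_m, dim_x))
x_manip = frozen + m_true @ Wm_true

def loss(Wm):
    # Squared reconstruction error on the manipulated batch.
    return np.mean((frozen + m_true @ Wm - x_manip) ** 2)

before = loss(Wm)
for _ in range(200):                  # plain gradient descent on Wm only
    resid = frozen + m_true @ Wm - x_manip
    grad = 2 * m_true.T @ resid / (n * dim_x)
    Wm -= 0.1 * grad
after = loss(Wm)
print(after < before)  # fine-tuning the m-branch reduces error on manipulated data
```

The design point this mirrors is that adapting only the manipulation-specific parameters lets the model absorb an unseen shift without disturbing what it learned about the other causes.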


A Causal View on Robustness of Neural Networks

Cheng Zhang, Kun Zhang, Yingzhen Li

arXiv.org Machine Learning

We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models possible manipulations on certain causes leading to changes in the observed effect. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. When compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves a disentangled representation that separates the representations of manipulations from those of other latent causes.
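The causal structure described in the abstract, where a label, other latent causes, and manipulations all act as causes of the observed effect, can be sketched as an ancestral-sampling toy model. This is a minimal sketch under stated assumptions: the linear maps below stand in for the paper's neural networks, and all names are hypothetical.

```python
import numpy as np

# Toy sketch of the deep CAMA generative factorisation
# p(x, y, z, m) = p(y) p(z) p(m) p(x | y, z, m):
# y is the class label, z the other latent causes, m the manipulations.

rng = np.random.default_rng(0)

class ToyCAMA:
    def __init__(self, dim_y=3, dim_z=4, dim_m=2, dim_x=8):
        # Linear "decoder" weights standing in for the networks that map
        # each cause into the observed effect x.
        self.Wy = rng.normal(size=(dim_y, dim_x))
        self.Wz = rng.normal(size=(dim_z, dim_x))
        self.Wm = rng.normal(size=(dim_m, dim_x))
        self.dim_y, self.dim_z, self.dim_m = dim_y, dim_z, dim_m

    def sample(self, n):
        # Ancestral sampling: y ~ Cat, z ~ N(0, I), m ~ N(0, I), then x | y, z, m.
        y = np.eye(self.dim_y)[rng.integers(0, self.dim_y, size=n)]
        z = rng.normal(size=(n, self.dim_z))
        m = rng.normal(size=(n, self.dim_m))
        x = y @ self.Wy + z @ self.Wz + m @ self.Wm  # mean of p(x | y, z, m)
        return x, y, z, m

x, y, z, m = ToyCAMA().sample(5)
print(x.shape)  # (5, 8)
```

Keeping m as a separate parent of x is what makes the disentanglement claimed in the abstract possible: the manipulation's effect on x is isolated in its own branch rather than entangled with y and z.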